Traditionally, a radiologist prepares diagnostic notes and shares them with a transcriptionist. The transcriptionist then drafts a preliminary formatted report referring to the notes, and finally the radiologist reviews the report, corrects errors, and signs it off. This workflow causes significant delays and errors in the reports. In the current research work, we focus on applications of NLP techniques such as Information Extraction (IE) and domain-specific Knowledge Graphs (KGs) to automatically generate radiology reports from a radiologist's dictation. This paper focuses on constructing a KG for each organ by extracting information from an existing large corpus of free-text radiology reports. We develop an information extraction pipeline that combines rule-based, pattern-based, and dictionary-based techniques with lexical-semantic features to extract entities and relationships. Information missing from the simplified dictation can be retrieved from the KGs to produce pathological descriptions and, in turn, the radiology report. The generated pathological descriptions were evaluated using semantic similarity metrics and showed 97% similarity with gold-standard pathological descriptions. Our analysis also shows that our IE module performs better than open-source tools in the radiology domain. Furthermore, we include a manual qualitative analysis by a radiologist, which shows that 80-85% of the generated reports are correctly written and the remainder are partially correct.
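As a concrete illustration of what combining dictionary-based and rule/pattern-based extraction can look like, here is a minimal Python sketch. The organ lexicon, regular expression, and example sentence are hypothetical and are not taken from the paper's pipeline.

```python
import re

# Hypothetical organ lexicon and finding pattern (illustration only).
ORGAN_LEXICON = {"liver", "kidney", "spleen", "pancreas"}
FINDING_PATTERN = re.compile(
    r"(?P<organ>\w+)\s+(?:is|appears|shows)\s+(?P<finding>[\w\s]+?)(?:\.|,|$)",
    re.IGNORECASE,
)

def extract_entities(sentence: str):
    """Return (organ, finding) pairs found by pattern + dictionary matching."""
    pairs = []
    for m in FINDING_PATTERN.finditer(sentence):
        organ = m.group("organ").lower()
        if organ in ORGAN_LEXICON:  # dictionary check
            pairs.append((organ, m.group("finding").strip()))
    return pairs

print(extract_entities("Liver is enlarged, kidney appears normal."))
# [('liver', 'enlarged'), ('kidney', 'normal')]
```

In a full pipeline, such extracted pairs would populate the per-organ KG and fill in details missing from a brief dictation.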
Generative models have been widely studied in computer vision. Recently, diffusion models have drawn substantial attention due to the high quality of their generated images. A key desired property of image generative models is the ability to disentangle different attributes, which should enable modification towards a style without changing the semantic content, and the modification parameters should generalize to different images. Previous studies have found that generative adversarial networks (GANs) are inherently endowed with such disentanglement capability, so they can perform disentangled image editing without re-training or fine-tuning the network. In this work, we explore whether diffusion models are also inherently equipped with such a capability. Our finding is that for stable diffusion models, by partially changing the input text embedding from a neutral description (e.g., "a photo of person") to one with style (e.g., "a photo of person with smile") while fixing all the Gaussian random noises introduced during the denoising process, the generated images can be modified towards the target style without changing the semantic content. Based on this finding, we further propose a simple, light-weight image editing algorithm where the mixing weights of the two text embeddings are optimized for style matching and content preservation. This entire process only involves optimizing over around 50 parameters and does not fine-tune the diffusion model itself. Experiments show that the proposed method can modify a wide range of attributes, with the performance outperforming diffusion-model-based image-editing algorithms that require fine-tuning. The optimized weights generalize well to different images. Our code is publicly available at https://github.com/UCSB-NLP-Chang/DiffusionDisentanglement.
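To make the mixing idea concrete, the sketch below interpolates a neutral and a styled text embedding with roughly 50 learnable per-token weights. This is an illustration only: the toy losses merely stand in for the paper's style-matching and content-preservation objectives, and no diffusion model is actually invoked.

```python
import torch

def mix_embeddings(neutral, styled, w):
    # neutral, styled: (seq_len, dim) text embeddings; w: (seq_len, 1) weights in [0, 1]
    return (1 - w) * neutral + w * styled

seq_len, dim = 50, 768                       # typical text-embedding shape (assumed)
neutral = torch.randn(seq_len, dim)          # e.g., "a photo of person"
styled = torch.randn(seq_len, dim)           # e.g., "a photo of person with smile"
w = torch.nn.Parameter(torch.zeros(seq_len, 1))   # ~50 optimized parameters

opt = torch.optim.Adam([w], lr=0.05)
for _ in range(100):
    mixed = mix_embeddings(neutral, styled, torch.sigmoid(w))
    # Placeholder losses standing in for style matching and content preservation;
    # in the actual method these would be computed on generated images with
    # all denoising noises held fixed.
    loss = (mixed - styled).pow(2).mean() + 0.5 * (mixed - neutral).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```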
Cyber intrusion attacks that compromise users' critical and sensitive data are escalating in volume and intensity, especially with the growing connections between our daily life and the Internet. The large volume and high complexity of such intrusion attacks have impeded the effectiveness of most traditional defence techniques. At the same time, the remarkable performance of machine learning methods, especially deep learning, in computer vision has garnered interest from the cyber security community in further enhancing and automating intrusion detection. However, the expense of data labeling and the scarcity of anomalous data make it challenging to train an intrusion detector in a fully supervised manner. Therefore, intrusion detection based on unsupervised anomaly detection is an important capability as well. In this paper, we propose a three-stage deep-learning-based anomaly detection framework for network intrusion attacks. The framework integrates unsupervised (K-means clustering), semi-supervised (GANomaly), and supervised (CNN) learning algorithms. We evaluate and report the performance of our implemented framework on three benchmark datasets: NSL-KDD, CIC-IDS2018, and TON_IoT.
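A rough sketch of the first, unsupervised stage only (our own illustration, not the authors' implementation): cluster traffic features with K-means and forward the points farthest from their cluster centres to the later semi-supervised and supervised stages.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic traffic features: mostly "normal" flows plus a few outliers (made up).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (500, 10)),
               rng.normal(6, 1, (20, 10))])

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
candidates = np.where(dist > np.percentile(dist, 95))[0]   # stage-1 suspects
print(f"{len(candidates)} flows forwarded to the GANomaly and CNN stages")
```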
One of the biggest challenges of natural language generation (NLG) is the proper handling of named entities. Named entities are a common source of grammar mistakes such as wrong prepositions, wrong article handling, or incorrect entity inflection. Without factoring linguistic representation, such errors are often underrepresented when evaluating on a small set of arbitrarily picked argument values, or when translating a dataset from a linguistically simpler language, like English, to a linguistically complex language, like Russian. However, for some applications, broadly precise grammatical correctness is critical -- native speakers may find entity-related grammar errors silly, jarring, or even offensive. To enable the creation of more linguistically diverse NLG datasets, we release a Corpus of Linguistically Significant Entities (CLSE) annotated by linguist experts. The corpus includes 34 languages and covers 74 different semantic types to support various applications from airline ticketing to video games. To demonstrate one possible use of CLSE, we produce an augmented version of the Schema-Guided Dialog Dataset, SGD-CLSE. Using the CLSE's entities and a small number of human translations, we create a linguistically representative NLG evaluation benchmark in three languages: French (high-resource), Marathi (low-resource), and Russian (highly inflected language). We establish quality baselines for neural, template-based, and hybrid NLG systems and discuss the strengths and weaknesses of each approach.
Cohort Intelligence (CI) is one such novel optimization algorithm. Since its inception, it has been successfully applied in various domains within a short span of time, and its results have been observed to be competitive with comparable algorithms. So far, no bibliometric analysis of this kind has been carried out on CI and its related applications. Hence, this research paper will serve as an icebreaker for those who wish to take CI to the next level. In this paper, the CI publications available in Scopus are analyzed through charts and network diagrams with respect to authors, source titles, keywords, journals, and years. To some extent, this bibliometric paper showcases CI, its applications, and a detailed systematic review along with the bibliometric details.
Consider the following optimization problem: given $n \times n$ matrices $A$ and $\Lambda$, maximize $\langle A, U \Lambda U^* \rangle$, where $U$ varies over the unitary group $\mathrm{U}(n)$. This problem seeks to approximate $A$ by a matrix whose spectrum is the same as that of $\Lambda$, and, by setting $\Lambda$ to appropriate diagonal matrices, one can recover matrix approximation problems such as PCA and rank-$k$ approximation. We study the problem of designing differentially private algorithms for this optimization problem in the setting where the matrix $A$ is constructed from users' private data. We give efficient private algorithms together with upper and lower bounds on the approximation error. Our results unify and improve upon several prior works on private matrix approximation problems. They rely on extensions of packing/covering number bounds for Grassmannians to unitary orbits, which should be of independent interest.
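As a reading aid in our own notation, and under the additional assumption that $A$ is Hermitian (e.g., a covariance matrix built from user data), the objective and its PCA special case can be written as:

```latex
\[
\max_{U \in \mathrm{U}(n)} \langle A,\, U \Lambda U^{*} \rangle
   \;=\; \max_{U \in \mathrm{U}(n)} \operatorname{tr}\!\left(A\, U \Lambda U^{*}\right),
\qquad
\Lambda = \operatorname{diag}(\underbrace{1,\dots,1}_{k},\,0,\dots,0)
\;\Longrightarrow\;
\operatorname{tr}\!\left(A\, U \Lambda U^{*}\right)
   = \operatorname{tr}\!\left(U_k^{*} A\, U_k\right),
\]
```

where $U_k$ collects the first $k$ columns of $U$, so the problem reduces to the familiar top-$k$ subspace (PCA) objective.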
Machine learning (ML) interpretability techniques can reveal undesirable patterns in the data that models exploit to make predictions, patterns that can cause harm once deployed. However, it is not always clear how to act on these patterns. In a collaboration between ML and human-computer interaction researchers, physicians, and data scientists, we develop GAM Changer, the first interactive system that helps domain experts and data scientists easily and responsibly edit Generalized Additive Models (GAMs) and fix problematic patterns. With novel interaction techniques, our tool puts interpretability into action: it empowers users to analyze, validate, and align model behaviors with their knowledge and values. Physicians have started to use our tool to investigate and fix pneumonia and sepsis risk prediction models, and an evaluation with 7 data scientists working in diverse domains highlights that our tool is easy to use, meets their model editing needs, and fits into their current workflows. Built with modern web technologies, our tool runs locally in users' web browsers or computational notebooks, lowering the barrier to use. GAM Changer is available at the following public demo link: https://interpret.ml/gam-changer.
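GAM Changer itself is an interactive web tool, so the purely conceptual NumPy snippet below only conveys what "editing a GAM" means: a GAM's prediction is a sum of per-feature shape functions, and an edit rewrites one shape function, here by enforcing monotonicity on a hypothetical age-risk curve. The feature, bins, and values are made up.

```python
import numpy as np

# Hypothetical shape function for an "age" feature: risk_contrib[i] applies to
# ages in [bins[i], bins[i+1]).
bins = np.linspace(0, 100, 11)
risk_contrib = np.array([0.5, 0.3, 0.1, 0.0, 0.1, 0.2, 0.4, 0.6, 0.9, 1.2])

def edit_monotone_increasing(scores, start_bin):
    """Force the shape function to be non-decreasing from `start_bin` onwards."""
    edited = scores.copy()
    for i in range(start_bin + 1, len(edited)):
        edited[i] = max(edited[i], edited[i - 1])
    return edited

# e.g., remove a counter-intuitive dip in predicted risk for middle-aged patients
risk_contrib_edited = edit_monotone_increasing(risk_contrib, start_bin=2)
print(risk_contrib_edited)
```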
Evaluation in machine learning is usually informed by past choices, for example which datasets or metrics to use. This standardization enables comparison on an equal footing using leaderboards, but the evaluation choices become sub-optimal as better alternatives arise. This problem is especially pertinent in natural language generation, which requires ever-improving datasets, metrics, and human evaluation to make definitive claims. To make following best model-evaluation practices easier, we introduce GEMv2. The new version of the Generation, Evaluation, and Metrics Benchmark provides modular infrastructure for dataset, model, and metric developers to benefit from each other's work. GEMv2 supports 40 documented datasets in 51 languages. Models for all datasets can be evaluated online, and our interactive data card creation and rendering tools make it easier to add new datasets to the living benchmark.
Most prior convergence results on differentially private stochastic gradient descent (DP-SGD) are derived under the simplistic assumption of uniform Lipschitzness, i.e., that the per-sample gradients are uniformly bounded. In many problems, e.g., linear regression with Gaussian data, this assumption is unrealistic. We relax uniform Lipschitzness by instead assuming that each per-sample gradient has a \textit{sample-dependent} upper bound, i.e., a per-sample Lipschitz constant, which may itself be unbounded. When the per-sample Lipschitz constants have bounded moments, we derive new convergence results for DP-SGD on both convex and non-convex functions. Furthermore, we provide principled guidance on choosing the clipping norm in DP-SGD for convex settings satisfying our relaxed version of Lipschitzness, without making distributional assumptions on the Lipschitz constants. We verify the effectiveness of our recommendations via experiments on benchmark datasets.
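For context, the standard DP-SGD clip-and-noise step that the clipping-norm guidance refers to looks roughly like the following generic sketch (not the paper's new analysis or recommended settings; model, data, and hyperparameters are placeholders):

```python
import torch

def dp_sgd_step(model, loss_fn, xb, yb, lr=0.1, C=1.0, sigma=1.0):
    """One DP-SGD step: clip each per-sample gradient to norm C, sum, add noise, average."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xb, yb):                        # per-sample gradients
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in params))
        scale = min(1.0, C / (norm.item() + 1e-12)) # clip to norm at most C
        for s, p in zip(summed, params):
            s += scale * p.grad
    with torch.no_grad():
        for s, p in zip(summed, params):
            noisy = (s + sigma * C * torch.randn_like(s)) / len(xb)
            p -= lr * noisy

model = torch.nn.Linear(5, 1)
xb, yb = torch.randn(8, 5), torch.randn(8, 1)
dp_sgd_step(model, torch.nn.functional.mse_loss, xb, yb)
```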
Existing theory predicts that data heterogeneity will degrade the performance of the Federated Averaging (FedAvg) algorithm in federated learning. In practice, however, the simple FedAvg algorithm converges well. This paper explains the seemingly unreasonable effectiveness of FedAvg that contradicts previous theoretical predictions. We find that the key assumption of bounded gradient dissimilarity in previous theoretical analyses is too pessimistic to characterize data heterogeneity in practical applications. For a simple quadratic problem, we show that there exist regimes in which large gradient dissimilarity has no negative impact on the convergence of FedAvg. Motivated by this observation, we propose a new quantity, average drift at the optimum, to measure the effect of data heterogeneity, and explicitly use it to present a new theoretical analysis of FedAvg. We show that the average drift at the optimum is nearly zero in many practical federated training tasks, whereas the gradient dissimilarity can be large. Our new analysis suggests that FedAvg can have identical convergence rates in homogeneous and heterogeneous data settings, leading to a better understanding of its empirical success.
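A toy FedAvg loop on client quadratics $f_i(w) = \tfrac{1}{2}\lVert A_i w - b_i\rVert^2$, echoing the quadratic example mentioned in the abstract (illustration only; the client data, step sizes, and round counts are made up and are not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(10)]

w = np.zeros(5)
lr, local_steps = 0.01, 5
for rnd in range(200):                     # communication rounds
    local_models = []
    for A, b in clients:                   # each client runs a few local GD steps
        w_i = w.copy()
        for _ in range(local_steps):
            w_i -= lr * A.T @ (A @ w_i - b)
        local_models.append(w_i)
    w = np.mean(local_models, axis=0)      # server averages the local models
print("global objective:",
      sum(0.5 * np.sum((A @ w - b) ** 2) for A, b in clients))
```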